3D point cloud registration is a fundamental problem in computer vision and robotics. Recently, learning-based point cloud registration methods have made great progress. However, these methods are sensitive to outliers, which lead to a large number of incorrect correspondences. In this paper, we propose a novel deep graph matching-based framework for point cloud registration. Specifically, we first transform point clouds into graphs and extract deep features for each point. Then, we develop a module based on deep graph matching to calculate a soft correspondence matrix. By using graph matching, not only the local geometry of each point but also its structure and topology in a larger range are considered in establishing correspondences, so that more correct correspondences are found. We train the network with a loss directly defined on the correspondences, and in the test stage the soft correspondences are transformed into hard one-to-one correspondences so that registration can be performed by a correspondence-based solver. Furthermore, we introduce a transformer-based method to generate edges for graph construction, which further improves the quality of the correspondences. Extensive experiments on object-level and scene-level benchmark datasets show that the proposed method achieves state-of-the-art performance. The code is available at: \href{https://github.com/fukexue/RGM}{https://github.com/fukexue/RGM}.
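To make the last step concrete, the sketch below shows one common way (an assumption on our part, since the abstract only mentions "a correspondence-based solver") to turn a soft correspondence matrix into hard one-to-one matches with the Hungarian algorithm and then estimate the rigid transform with an SVD-based (Kabsch) solver; the function names are ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hard_correspondences(soft_corr):
    """Turn an (N, M) soft correspondence matrix into one-to-one matches
    via the Hungarian algorithm (maximize the total matching score)."""
    rows, cols = linear_sum_assignment(-soft_corr)  # negate: the solver minimizes
    return rows, cols

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) aligning src -> dst, both (K, 3)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Usage sketch: given soft_corr from a matching network and point sets P (N,3), Q (M,3):
#   i, j = hard_correspondences(soft_corr); R, t = kabsch(P[i], Q[j])
```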
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and task the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating rates of up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
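As a hedged illustration of the kind of INT8 deployment pipeline such a challenge targets, the following sketch performs generic full-integer post-training quantization with TensorFlow Lite. The model handle `keras_sr_model` and the calibration generator are placeholders, and the participants' actual quantization schemes (e.g., quantization-aware training) may differ.

```python
import numpy as np
import tensorflow as tf

def representative_data_gen():
    # In practice, low-resolution crops from DIV2K would be yielded here;
    # random data only keeps the sketch self-contained.
    for _ in range(100):
        yield [np.random.rand(1, 180, 320, 3).astype(np.float32)]

def quantize_int8(keras_sr_model):
    """Full-integer (INT8) post-training quantization with TFLite."""
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_sr_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data_gen
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8    # fully integer I/O for the NPU
    converter.inference_output_type = tf.int8
    return converter.convert()                  # serialized .tflite flatbuffer
```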
Various depth estimation models are now widely used on many mobile and IoT devices for image segmentation, bokeh effect rendering, object tracking and many other mobile tasks. Thus, it is crucial to have efficient and accurate depth estimation models that can run fast on low-power mobile chipsets. In this Mobile AI challenge, the target was to develop deep learning-based single image depth estimation solutions that can show real-time performance on IoT platforms and smartphones. For this, the participants used a large-scale RGB-to-depth dataset that was collected with the ZED stereo camera, which is capable of generating depth maps for objects located at up to 50 meters. The runtime of all models was evaluated on the Raspberry Pi 4 platform, where the developed solutions were able to generate VGA resolution depth maps at up to 27 FPS while achieving high fidelity results. All models developed in the challenge are also compatible with any Android or Linux-based mobile device, and their detailed description is provided in this paper.
Deep learning methods have contributed substantially to the rapid advancement of medical image segmentation, the quality of which relies on the suitable design of loss functions. Popular loss functions, including the cross-entropy and Dice losses, often fall short in boundary detection, thereby limiting high-resolution downstream applications such as automated diagnoses and procedures. We develop a novel loss function tailored to reflect boundary information and enhance boundary detection. Since the contrast between the segmentation and background regions along the classification boundary naturally induces heterogeneity over the pixels, we propose the piece-wise two-sample t-test augmented (PTA) loss, which is infused with a statistical test for such heterogeneity. We demonstrate the improved boundary detection power of the PTA loss compared to benchmark losses without a t-test component.
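A minimal sketch of the idea, as one illustrative reading rather than the paper's exact piecewise construction: a standard segmentation loss is augmented with a Welch two-sample t-statistic computed over predicted probabilities on the two sides of the ground-truth boundary, so that the network is rewarded for separating the two pixel populations. All names and the boundary-band construction below are our assumptions.

```python
import torch
import torch.nn.functional as F

def t_statistic(a, b, eps=1e-6):
    """Welch two-sample t-statistic between two 1-D tensors of probabilities."""
    return (a.mean() - b.mean()) / torch.sqrt(a.var() / a.numel() + b.var() / b.numel() + eps)

def pta_like_loss(prob, target, band=1, alpha=0.1):
    """Binary cross-entropy base loss plus a boundary t-test term.

    prob:   (B, 1, H, W) foreground probabilities in [0, 1]
    target: (B, 1, H, W) binary ground-truth masks
    """
    base = F.binary_cross_entropy(prob, target.float())
    # pixels within `band` of the ground-truth boundary, split by side
    dil = F.max_pool2d(target.float(), 2 * band + 1, stride=1, padding=band)
    ero = 1.0 - F.max_pool2d(1.0 - target.float(), 2 * band + 1, stride=1, padding=band)
    inner = prob[(target == 1) & (ero == 0)]   # foreground pixels near the boundary
    outer = prob[(target == 0) & (dil == 1)]   # background pixels near the boundary
    if inner.numel() > 1 and outer.numel() > 1:
        base = base - alpha * t_statistic(inner, outer)  # reward separation across the boundary
    return base
```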
Pre-trained image-text models such as CLIP have demonstrated the strong power of vision-language representations learned from large-scale web-collected image-text data. Given such well-learned visual features, some existing works transfer image representations to the video domain and achieve good results. However, how to utilize image-language pre-trained models (e.g., CLIP) for video-language pre-training (post-pretraining) is still under-explored. In this paper, we investigate two questions: 1) what factors hinder post-pretrained CLIP from further improving performance on video-language tasks, and 2) how can the impact of these factors be mitigated? Through a series of comparative experiments and analyses, we find that the data scale and the domain gap between language sources have large impacts. Motivated by these findings, we propose an omnisource cross-modal learning method equipped with a video proxy mechanism on top of CLIP, namely CLIP-ViP. Extensive results show that our approach improves the performance of CLIP on video retrieval. Our model also achieves SOTA results on a variety of datasets, including MSR-VTT, DiDeMo, LSMDC and ActivityNet. We release the code and pre-trained CLIP-ViP models at https://github.com/microsoft/xpretrain/tree/main/clip-vip.
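For context, a generic CLIP-style video-text retrieval objective is sketched below: per-frame embeddings are mean-pooled into a video embedding and trained with a symmetric InfoNCE loss. This is a baseline illustration only and does not reproduce the paper's video proxy mechanism or omnisource learning.

```python
import torch
import torch.nn.functional as F

def video_text_contrastive_loss(frame_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss between video and text embeddings.

    frame_emb: (B, T, D) per-frame features from the image encoder
    text_emb:  (B, D)    sentence features from the text encoder
    """
    video_emb = F.normalize(frame_emb.mean(dim=1), dim=-1)   # mean-pool frames into a video vector
    text_emb = F.normalize(text_emb, dim=-1)
    logits = video_emb @ text_emb.t() / temperature          # (B, B) similarity matrix
    labels = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```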
Popular approaches to semi-supervised semantic segmentation mostly adopt a unified network model based on convolutional neural networks (CNNs) and enforce consistency of the model's predictions under small perturbations applied to the inputs or the model. However, this learning paradigm suffers from a) the limited learning capability of CNN-based models; b) a limited ability to learn discriminative features from unlabeled data; and c) limited learning of both global and local information from the whole image. In this paper, we propose a novel semi-supervised learning approach, termed Transformer-CNN Cohort (TCC), which consists of two students: one based on the Vision Transformer (ViT) and the other based on a CNN. Our method subtly incorporates multi-level consistency regularization on the predictions and on the heterogeneous feature spaces via pseudo-labeling for the unlabeled data. First, since the inputs of the ViT student are image patches, the extracted feature maps encode crucial class-wise statistics. To this end, we propose class-aware feature consistency distillation (CFCD), which first leverages each student's outputs as pseudo-labels to generate class-aware feature (CF) maps, and then transfers knowledge between the students via the CF maps. Second, since the ViT student has more uniform representations across all layers, we propose consistency-aware cross-distillation to transfer knowledge between the students' class-wise, pixel-wise predictions. We validate the TCC framework on the Cityscapes and Pascal VOC 2012 datasets, on which it significantly outperforms existing semi-supervised methods.
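The sketch below gives one plausible reading of class-aware feature (CF) maps, assuming both students' feature maps have already been projected to a common channel dimension and resolution: pseudo-labels from one student drive masked average pooling of the other's features, and the resulting class prototypes are matched with an L2 loss. The exact CFCD formulation in the paper may differ.

```python
import torch
import torch.nn.functional as F

def class_aware_features(feat, pseudo_label, num_classes):
    """Masked average pooling: one prototype vector per class.

    feat:         (B, C, H, W) student feature map
    pseudo_label: (B, H, W)    long tensor of argmax predictions used as pseudo-labels
    returns:      (B, num_classes, C)
    """
    one_hot = F.one_hot(pseudo_label, num_classes).permute(0, 3, 1, 2).float()  # (B, K, H, W)
    area = one_hot.sum(dim=(2, 3)).clamp(min=1.0)                               # (B, K)
    proto = torch.einsum('bkhw,bchw->bkc', one_hot, feat) / area.unsqueeze(-1)
    return proto

def cfcd_loss(feat_vit, feat_cnn, pl_vit, pl_cnn, num_classes):
    """Cross-distillation between the ViT and CNN students' class prototypes.
    Assumes feat_vit and feat_cnn share the same channel dimension."""
    p_vit = class_aware_features(feat_vit, pl_cnn, num_classes)  # pooled with the peer's pseudo-labels
    p_cnn = class_aware_features(feat_cnn, pl_vit, num_classes)
    return F.mse_loss(p_vit, p_cnn)
```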
This paper reviews the challenge on super-resolution of compressed image and video at AIM 2022. The challenge includes two tracks. Track 1 targets the super-resolution of compressed images, and Track 2 targets the super-resolution of compressed videos. In Track 1, we use the popular dataset DIV2K as the training, validation and test sets. In Track 2, we propose the LDV 3.0 dataset, which contains 365 videos, including the LDV 2.0 dataset (335 videos) and 30 additional videos. In this challenge, 12 teams and 2 teams submitted final results for Track 1 and Track 2, respectively. The proposed methods and solutions gauge the state of the art of super-resolution on compressed images and videos. The proposed LDV 3.0 dataset is available at https://github.com/renyang-home/ldv_dataset. The homepage of this challenge is at https://github.com/renyang-home/aim22_compresssr.
Bayesian optimization (BO) is a typical approach for solving expensive optimization problems. In each iteration of BO, a Gaussian process (GP) model is trained using the previously evaluated solutions; the next candidate solution for expensive evaluation is then recommended by maximizing a cheap-to-evaluate acquisition function on the trained surrogate model. The acquisition function plays a crucial role in the optimization process. However, each acquisition function has its own strengths and weaknesses, and no single acquisition function consistently outperforms the others across various problems. To better leverage the advantages of different acquisition functions, we propose a new method for batch BO. In each iteration, three acquisition functions are dynamically selected according to their current and historical performance to form a multi-objective optimization problem (MOP). By optimizing this MOP with an evolutionary multi-objective algorithm, a set of non-dominated solutions is obtained. To select the batch of solutions, we rank these non-dominated solutions into several layers according to their relative performance on the three acquisition functions. Empirical results show that the proposed method is competitive with state-of-the-art methods on different problems.
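A small sketch under common assumptions (a GP posterior mean and standard deviation already computed over a candidate pool; expected improvement, probability of improvement and a negated lower confidence bound as the three objectives, all to be maximized for a minimization problem): the per-candidate acquisition values form the multi-objective scores, and the non-dominated candidates are the pool from which a batch would be drawn. The specific acquisition set and ranking rule are illustrative, not the paper's.

```python
import numpy as np
from scipy.stats import norm

def acquisitions(mu, sigma, best_f, kappa=2.0, xi=0.01):
    """Three common acquisition values (to be maximized) for candidate points."""
    z = (best_f - mu - xi) / np.maximum(sigma, 1e-9)             # minimization convention
    ei = (best_f - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    pi = norm.cdf(z)                                             # probability of improvement
    lcb = -(mu - kappa * sigma)                                  # negated lower confidence bound
    return np.stack([ei, pi, lcb], axis=1)                       # (N, 3) objective matrix

def non_dominated(scores):
    """Indices of candidates not dominated on any of the three acquisition objectives."""
    keep = []
    for i, s in enumerate(scores):
        dominated = np.any(np.all(scores >= s, axis=1) & np.any(scores > s, axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)
```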
Heterogeneous face recognition (HFR) aims to match faces across different domains (e.g., visible to near-infrared images) and has been widely applied in authentication and forensics scenarios. However, HFR is a challenging problem because of the large cross-domain discrepancy, the limited heterogeneous data pairs, and the large variations in facial attributes. To address these challenges, we propose a new HFR method from the perspective of heterogeneous data augmentation, named Face Synthesis with Identity-Attribute Disentanglement (FSIAD). First, identity-attribute disentanglement (IAD) decouples face images into identity-related representations and identity-unrelated representations (called attributes), and then reduces the correlation between identities and attributes. Second, we design a face synthesis module (FSM) to generate a large number of images with random combinations of disentangled identities and attributes, enriching the attribute diversity of the synthetic images. Both the original and synthetic images are used to train the HFR network to tackle these challenges and improve HFR performance. Extensive experiments on five HFR databases validate that FSIAD achieves superior performance over previous HFR methods. In particular, FSIAD obtains a 4.8% improvement in VR@FAR=0.01% on LAMP-HQ, the largest HFR database so far.
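As an illustrative sketch only (the functions and the penalty below are our assumptions, not the paper's modules): attribute codes can be randomly re-paired with identity codes to drive a decoder that synthesizes augmented faces, and a simple cross-covariance penalty can discourage correlation between the two representations.

```python
import torch

def recombine(identity_codes, attribute_codes):
    """Pair each identity code with a randomly drawn attribute code from the batch,
    producing inputs for a decoder that synthesizes augmented face images."""
    perm = torch.randperm(attribute_codes.size(0))
    return identity_codes, attribute_codes[perm]

def decorrelation_penalty(identity_codes, attribute_codes):
    """Penalize batch-wise correlation between identity and attribute embeddings."""
    id_c = identity_codes - identity_codes.mean(dim=0)
    at_c = attribute_codes - attribute_codes.mean(dim=0)
    cov = id_c.t() @ at_c / (identity_codes.size(0) - 1)   # (D_id, D_attr) cross-covariance
    return (cov ** 2).mean()
```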
Fundus photography is a routine clinical examination for diagnosing and monitoring ocular diseases. However, for cataract patients, fundus images always suffer from quality degradation caused by the clouded lens. The degradation prevents reliable diagnosis by ophthalmologists or computer-aided systems. To improve the certainty of clinical diagnosis, restoration algorithms have been proposed to enhance the quality of fundus images. Unfortunately, challenges remain in the deployment of these algorithms, such as collecting sufficient training data and preserving retinal structures. In this paper, to circumvent the strict deployment requirements, a structure-consistent restoration network (SCR-Net) for cataract fundus images is developed from synthesized data that share the same structures. A cataract simulation model is first designed to collect synthesized cataract sets (SCS) formed by cataract fundus images sharing identical structures. High-frequency components (HFCs) are then extracted from the SCS to constrain structure consistency, thereby enforcing structure preservation in SCR-Net. Experiments demonstrate the effectiveness of SCR-Net in comparison with state-of-the-art methods and in follow-up clinical applications. The code is available at https://github.com/liamheng/arcnet-medical-image-enhancement.
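A hedged sketch of high-frequency component (HFC) extraction as it is commonly done, assuming single-channel fundus images: subtract a Gaussian-smoothed low-frequency background and compare the residuals of the restored output and a structure-sharing clear reference. The paper's exact filtering and loss may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_frequency_component(image, sigma=5.0):
    """HFC of a grayscale fundus image: the residual after removing a
    Gaussian-smoothed low-frequency background, which mainly keeps vessels and edges."""
    image = image.astype(np.float32)
    return image - gaussian_filter(image, sigma=sigma)

def structure_consistency(restored, clear_reference, sigma=5.0):
    """Mean L1 distance between the HFCs of the restored image and a clear image
    that shares the same retinal structures."""
    return np.abs(high_frequency_component(restored, sigma)
                  - high_frequency_component(clear_reference, sigma)).mean()
```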